Multi-Feature Fusion-Guided Multiscale Bidirectional Attention Networks for Logistics Pallet Segmentation
Authors
Abstract
In the smart logistics industry, unmanned forklifts that intelligently identify pallets can improve work efficiency in warehousing and transportation and are superior to traditional forklifts driven by humans. They therefore play a critical role in warehousing, and semantic segmentation is an effective method for realizing the intelligent identification of pallets. However, most current recognition algorithms are ineffective due to the diverse types of pallets, their complex shapes, frequent blockades in production environments, and changing lighting conditions. This paper proposes a novel multi-feature fusion-guided multiscale bidirectional attention (MFMBA) neural network for pallet segmentation. To predict the foreground category (the pallet) and the background category (the cargo) of each image, our approach extracts three types of features (grayscale, texture, and Hue, Saturation, Value features) and fuses them. The multiscale architecture deals with the problem that the size and shape of a pallet in an image may differ from the actual ones, which usually makes feature extraction difficult, and our study also extracts additional semantic features. Since a conventional attention mechanism assigns weights from only a single direction, we designed a cross-attention mechanism that assigns weights in each of two directions, horizontally and vertically, significantly improving segmentation accuracy. Finally, comparative experimental results show that the precision of the proposed algorithm is 0.53%–8.77% higher than that of the other methods compared.
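The multi-feature extraction and fusion step can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the abstract does not specify the texture descriptor or the fusion module, so gradient magnitude stands in for texture and channel concatenation stands in for fusion.

```python
import numpy as np

def rgb_to_grayscale(img):
    # Luminance-weighted grayscale, shape (H, W), from an (H, W, 3) RGB image in [0, 1]
    return img @ np.array([0.299, 0.587, 0.114])

def texture_map(gray):
    # Simple texture cue: gradient magnitude (a stand-in for the
    # paper's unspecified texture feature).
    gy, gx = np.gradient(gray)
    return np.sqrt(gx**2 + gy**2)

def rgb_to_hsv(img):
    # Vectorized RGB -> HSV, all three channels scaled to [0, 1]
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    v = img.max(axis=-1)
    c = v - img.min(axis=-1)                      # chroma
    s = np.where(v > 0, c / np.where(v > 0, v, 1), 0)
    safe_c = np.where(c > 0, c, 1)                # avoid division by zero
    h = np.select(
        [c == 0, v == r, v == g],
        [0.0,
         ((g - b) / safe_c) % 6,                  # max channel is red
         (b - r) / safe_c + 2],                   # max channel is green
        default=(r - g) / safe_c + 4,             # max channel is blue
    ) / 6.0
    return np.stack([h, s, v], axis=-1)

def fuse_features(img):
    # Fuse grayscale, texture, and HSV into one (H, W, 5) feature tensor
    gray = rgb_to_grayscale(img)
    tex = texture_map(gray)
    return np.concatenate([gray[..., None], tex[..., None], rgb_to_hsv(img)], axis=-1)
```

In a real pipeline the fused tensor would be fed to the segmentation backbone; concatenation is the simplest fusion choice, and a learned fusion module could replace it.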
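The idea of assigning attention weights along two spatial directions can likewise be illustrated with a toy NumPy sketch. The dot-product formulation, the residual summation, and the function names below are assumptions for illustration, not the MFMBA architecture itself:

```python
import numpy as np

def _softmax(x):
    # Numerically stable softmax over the last axis
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def directional_attention(feat, axis):
    """Dot-product self-attention along one spatial axis of an (H, W, C) map.

    axis=1: each pixel attends to all pixels in its own row (horizontal);
    axis=0: each pixel attends to all pixels in its own column (vertical).
    """
    f = feat.transpose(1, 0, 2) if axis == 0 else feat  # (batch, seq, C)
    scores = f @ f.transpose(0, 2, 1) / np.sqrt(f.shape[-1])
    out = _softmax(scores) @ f                          # attention-weighted sum
    return out.transpose(1, 0, 2) if axis == 0 else out

def bidirectional_attention(feat):
    # Fuse the horizontal and vertical attention maps with the input (residual sum)
    return feat + directional_attention(feat, 1) + directional_attention(feat, 0)
```

The point of the two-direction design is that each pixel aggregates context from both its row and its column, rather than from a single direction only.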
Similar resources
Confidence-Guided Sequential Label Fusion for Multi-atlas Based Segmentation
Label fusion is a key step in multi-atlas based segmentation, which combines labels from multiple atlases to make the final decision. However, most current label fusion methods treat each voxel equally and independently during label fusion. From our point of view, however, different voxels play different roles, in that some voxels might have much higher confidence in label determi...
Segmentation Guided Attention Networks for Visual Question Answering
In this paper we propose to solve the problem of Visual Question Answering by using a novel segmentation guided attention based network which we call SegAttendNet. We use image segmentation maps, generated by a Fully Convolutional Deep Neural Network to refine our attention maps and use these refined attention maps to make the model focus on the relevant parts of the image to answer a question....
Multiscale Vessel-guided Airway Tree Segmentation
This paper presents a method for airway tree segmentation that uses a combination of a trained airway appearance model, vessel and airway orientation information, and region growing. The method uses a voxel classification based appearance model, which involves the use of a classifier that is trained to differentiate between airway and non-airway voxels. Vessel and airway orientation information...
Multiscale High-Level Feature Fusion for Histopathological Image Classification
Histopathological image classification is one of the most important steps for disease diagnosis. We proposed a method for multiclass histopathological image classification based on a deep convolutional neural network referred to as a coding network. It can gain a better representation of the histopathological image than using the coding network alone. The main process is that training a deep convolutiona...
Stacked Multiscale Feature Learning for Domain Independent Medical Image Segmentation
In this work we propose a feature-based segmentation approach that is domain independent. While most existing approaches are based on application-specific hand-crafted features, we propose a framework for learning features from data itself at multiple scales and depth. Our features can be easily integrated into classifiers or energy-based segmentation algorithms. We test the performance of our ...
Journal
عنوان ژورنال: Cmes-computer Modeling in Engineering & Sciences
سال: 2022
ISSN: 1526-1492, 1526-1506
DOI: https://doi.org/10.32604/cmes.2022.019785